Boosting Algorithms for Maximizing the Soft Margin

Authors

  • Manfred K. Warmuth
  • Karen A. Glocer
  • Gunnar Rätsch
Abstract

Algorithm 1: SoftBoost

1. Input: S = ⟨(x_1, y_1), ..., (x_N, y_N)⟩, desired accuracy δ, and capping parameter ν ∈ [1, N].
2. Initialize: d^0_n to the uniform distribution.
3. Do for t = 1, ...
   (a) Train the classifier on d^{t−1} and {u^1, ..., u^{t−1}} and obtain hypothesis h^t. Set u^t_n = h^t(x_n) y_n.
   (b) Calculate the edge γ_t of h^t: γ_t = d^{t−1} · u^t.
   (c) Set γ̂_t = (min_{m=1,...,t} γ_m) − δ.
   (d) Set γ* = solution to the primal linear programming problem.
   (e) Update ...
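The following is a minimal, hypothetical Python sketch of the boosting loop listed above; it is not the authors' reference implementation. It assumes weighted decision stumps as the base learner, uses scipy's SLSQP solver, and reads the truncated update step as a relative-entropy projection of the uniform distribution onto the capped set of distributions whose edge on every past hypothesis is at most γ̂_t (an assumption on my part). The final soft-margin LP that produces the combination weights w_m, and the precise stopping test, are omitted.

import numpy as np
from scipy.optimize import minimize

def train_stump(X, y, d):
    # Weighted decision stump: pick (feature, threshold, sign) maximizing the edge d . u.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                edge = float(np.dot(d, pred * y))
                if best is None or edge > best[0]:
                    best = (edge, j, thr, sign)
    _, j, thr, sign = best
    return lambda Z, j=j, thr=thr, sign=sign: sign * np.where(Z[:, j] <= thr, 1, -1)

def softboost(X, y, nu=4.0, delta=0.05, max_iters=50):
    # nu in [1, N] caps each example weight at 1/nu; delta is the desired accuracy.
    N = len(y)
    d = np.full(N, 1.0 / N)              # d^0: uniform distribution over examples
    U, edges, hyps = [], [], []
    for t in range(max_iters):
        h = train_stump(X, y, d)         # (a) train on the current distribution
        u = h(X) * y                     # u^t_n = h^t(x_n) y_n
        U.append(u)
        hyps.append(h)
        edges.append(float(d @ u))       # (b) edge of the new hypothesis
        gamma_hat = min(edges) - delta   # (c) target edge
        # Update (assumed form): minimize the relative entropy to the uniform
        # distribution subject to d . u^m <= gamma_hat for all past m,
        # sum(d) = 1, and 0 <= d_n <= 1/nu.
        def rel_entropy(dd):
            dd = np.maximum(dd, 1e-12)
            return float(np.sum(dd * np.log(dd * N)))
        cons = [{'type': 'eq', 'fun': lambda dd: np.sum(dd) - 1.0}]
        for um in U:
            cons.append({'type': 'ineq', 'fun': lambda dd, um=um: gamma_hat - dd @ um})
        res = minimize(rel_entropy, d, method='SLSQP',
                       bounds=[(1e-12, 1.0 / nu)] * N, constraints=cons)
        if not res.success:              # treated here as "constraints infeasible": stop boosting
            break
        d = res.x                        # warm-start the next projection from the new d^t
    return hyps, edges

# Illustrative usage on synthetic data (names and sizes are arbitrary):
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
hyps, edges = softboost(X, y, nu=4.0, delta=0.05)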


Related articles

A Duality View of Boosting Algorithms

We study boosting algorithms from a new perspective. We show that the Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. By looking at the dual problems of these boosting algorithms, we show that the success of boosting algorithms can be understood in terms of maintaining a better margin distribution by maxim...
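For reference, a common way to write the soft-margin linear program that LPBoost-style methods solve, together with its LP dual over example distributions, is sketched below in the notation of the algorithm above (γ, d, ν, and u^m_n = y_n h^m(x_n)); this is a textbook-style formulation, not a quotation of the cited paper's derivation.

\[
\begin{aligned}
\text{primal:}\quad & \max_{w,\,\rho,\,\xi}\ \rho - \tfrac{1}{\nu}\sum_{n=1}^{N}\xi_n
  \quad\text{s.t.}\quad \sum_{m} w_m\, u^m_n \ge \rho - \xi_n,\ \ \xi_n \ge 0,\ \ \sum_{m} w_m = 1,\ \ w \ge 0; \\
\text{dual:}\quad & \min_{d,\,\gamma}\ \gamma
  \quad\text{s.t.}\quad d \cdot u^m \le \gamma\ \ \text{for all } m,\ \ \sum_{n} d_n = 1,\ \ 0 \le d_n \le \tfrac{1}{\nu}.
\end{aligned}
\]

Entropy-regularized updates such as the one in Algorithm 1 replace the dual's linear objective with a relative-entropy term, which is the kind of primal-dual connection the duality view described above makes explicit.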


Non-convex boosting with minimum margin guarantees

Many classification algorithms achieve poor generalization accuracy on “noisy” data sets. We introduce a new non-convex boosting algorithm BrownBoost-δ, a noise-resistant booster, that is able to significantly increase accuracy on a set of noisy classification problems. Our algorithm consistently outperforms the original BrownBoost algorithm, AdaBoost, and LogitBoost on simulated and real data. ...


Speed and Sparsity of Regularized Boosting

Boosting algorithms with l1-regularization are of interest because l1-regularization leads to sparser composite classifiers. Moreover, Rosset et al. have shown that for separable data, standard lp-regularized loss minimization results in a margin maximizing classifier in the limit as regularization is relaxed. For the case p = 1, we extend these results by obtaining explicit convergence bounds o...
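The Rosset et al. result referred to above can be stated, roughly and in notation of my own choosing, for the p = 1 case with a margin-maximizing loss L (for example the exponential or logistic loss) on separable data, and assuming the l1-margin maximizer is unique:

\[
\hat{\beta}(\lambda) = \arg\min_{\beta}\ \sum_{i=1}^{N} L\bigl(y_i,\ \beta^{\top} x_i\bigr) + \lambda \lVert \beta \rVert_1,
\qquad
\frac{\hat{\beta}(\lambda)}{\lVert \hat{\beta}(\lambda) \rVert_1}
\;\xrightarrow[\lambda \to 0^{+}]{}\;
\arg\max_{\lVert \beta \rVert_1 = 1}\ \min_{i}\ y_i\, \beta^{\top} x_i .
\]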


Regularizing AdaBoost

Boosting methods maximize a hard classification margin and are known as powerful techniques that do not exhibit overfitting for low-noise cases. For noisy data, too, boosting will try to enforce a hard margin and thereby give too much weight to outliers, which then leads to the dilemma of non-smooth fits and overfitting. Therefore we propose three algorithms to allow for soft margin classification by ...


Boosting as a Regularized Path to a Maximum Margin Classifier

In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector. This helps explain the success of boosting with early stopping as regularized fitting of the loss criterion. For the two most commonly used criteria...
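A hedged way to write the l1-constrained fitting problem that the abstract relates boosting to (again my own notation, with base hypotheses h_j and a loss L such as the exponential or binomial log-likelihood loss):

\[
\hat{\beta}(c) = \arg\min_{\beta}\ \sum_{i=1}^{N} L\Bigl(y_i,\ \sum_{j}\beta_j\, h_j(x_i)\Bigr)
\quad\text{s.t.}\quad \lVert \beta \rVert_1 \le c ,
\]

with the path c ↦ β̂(c) being what ε-stepwise boosting is argued to follow approximately as the step size ε → 0; early stopping then corresponds to choosing a particular constraint level c.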



Publication date: 2007